Image selection region stretching to rectangle - C++

Currently I am working on an issue, which is illustrated in the image below.
On the left-hand side, the source image is shown. I have a selection region, which can be a polygon of 4 points.
On the right-hand side, the result of cutting the image is shown. As can be seen, the pixels that fall inside the selection region are stretched to the rectangle of the resulting image.
I would like to know how to achieve this effect using plain Qt or OpenCV.

The process could be performed with Qt using the following functions:
QTransform::squareToQuad: create the transformation matrix
Creates a transformation matrix, trans, that maps a unit square to a four-sided polygon, quad. Returns true if the transformation is constructed or false if such a transformation does not exist.
QImage::transformed: transform the image with the constructed transformation matrix
Returns a copy of the image that is transformed using the given transformation matrix and transformation mode.
QImage::copy: extract the desired area
Returns a sub-area of the image as a new image.
Please try to read the docs and consider posting your solution when it works.
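For illustration only, here is a rough, untested sketch of how those three calls might fit together; the function name, the quad corner ordering, and the output size are assumptions, not part of the original answer:

#include <QImage>
#include <QPolygonF>
#include <QSize>
#include <QTransform>

// Sketch: warp the quad selection of `src` into an outSize rectangle.
// Assumes `quad` holds the four selection corners, starting top-left.
QImage warpQuadToRect(const QImage &src, const QPolygonF &quad, const QSize &outSize)
{
    QTransform trans;
    // squareToQuad maps the unit square onto the quad ...
    if (!QTransform::squareToQuad(quad, trans))
        return QImage(); // no such transformation exists
    // ... so invert it to map the quad back onto the unit square,
    // then scale the unit square up to the output size.
    QTransform toOutput = trans.inverted()
                        * QTransform::fromScale(outSize.width(), outSize.height());
    QImage warped = src.transformed(toOutput, Qt::SmoothTransformation);
    // transformed() shifts the result so the whole image fits; use
    // trueMatrix() to locate where the quad's first corner ended up.
    QTransform trueMat = QTransform::trueMatrix(toOutput, src.width(), src.height());
    QPointF topLeft = trueMat.map(quad.first());
    return warped.copy(qRound(topLeft.x()), qRound(topLeft.y()),
                       outSize.width(), outSize.height());
}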

The method described by m7913d may be useful.
I tried to implement it but, unfortunately, I wasn't able to get good results (maybe because of mistakes in specifying the coordinates).
I also found similar methods in OpenCV.
As the OpenCV API was simpler to use (in my case), I wrote the following code:
Mat src_img = imread(path.toStdString(), 1);
imshow("source", src_img);
// vectors for corners
vector<Point2f> origin;
vector<Point2f> dest;
// output image size
int w = src_img.cols;
int h = src_img.rows;
// specifying the ROI polygon
origin.clear();
origin.push_back(Point2f(w / 2 - 20, h / 2 - 20));  // lt
origin.push_back(Point2f(w / 2 + 20, h / 2 - 100)); // rt
origin.push_back(Point2f(w / 2 - 20, h / 2 + 20));  // lb
origin.push_back(Point2f(w / 2 + 20, h / 2 + 20));  // rb
// result storage (warpPerspective allocates it to the requested size)
Mat result;
// specifying the area where we want to place the warped ROI
dest.clear();
dest.push_back(Point2f(0, 0));
dest.push_back(Point2f(w / 2, 0));
dest.push_back(Point2f(0, h / 2));
dest.push_back(Point2f(w / 2, h / 2));
// creating the transform matrix
Mat warpMatrix = getPerspectiveTransform(origin, dest);
// warping and getting the result
warpPerspective(src_img, result, warpMatrix, Size(w / 2, h / 2));
imshow("result", result);
// merge the images into one: start from the source, then paste the warped ROI
Mat sum;
src_img.copyTo(sum);
result.copyTo(sum(Rect(40, 80, result.cols, result.rows)));
imshow("final", sum);

Related

How to calculate the distance between two circles in an image with OpenCV

[image with two circles]
I have an image that contains two fibers (appearing as two circles in the image). How can I calculate the distance between the two fibers?
I find it hard to detect the positions of the fibers. I have tried the HoughCircles function, but the parameters are hard to optimize and it cannot locate the circles precisely most of the time. Should I subtract the background first, or are there other methods? Many thanks!
Unfortunately, you haven't shown your preprocessing steps. In my approach, I'll do the following:
Convert input image to grayscale (see cvtColor).
Median blurring, maintains the "edges" (see medianBlur).
Adaptive thresholding (see adaptiveThreshold).
Morphological opening to get rid of small noise (see morphologyEx).
Find circles by HoughCircles.
Not done here: Possible refinements of the found circles. Exclude too small or too large circles. Use all prior information you have on that! For example, how large can the circles be at all?
Here's my whole code:
// Read image.
cv::Mat img = cv::imread("images/i7aJJ.jpg", cv::IMREAD_COLOR);
// Convert to grayscale for processing.
cv::Mat blk;
cv::cvtColor(img, blk, cv::COLOR_BGR2GRAY);
// Median blurring to improve following thresholding.
cv::medianBlur(blk, blk, 11);
// Adaptive thresholding.
cv::adaptiveThreshold(blk, blk, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C, cv::THRESH_BINARY, 51, -2);
// Morphological opening to get rid of small noise.
cv::morphologyEx(blk, blk, cv::MORPH_OPEN, cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3)));
// Find circles using Hough transform.
std::vector<cv::Vec4f> circles;
cv::HoughCircles(blk, circles, cv::HOUGH_GRADIENT, 1.0, 300, 50, 25, 100);
// TODO: Refinement of found circles, if there are more than two.
// For example, calculate areas: Neglect too small or too large areas.
// Compare all areas, and keep the two with nearly matching areas and
// suitable areas.
// Draw circles in input image.
for (cv::Vec4f& circle : circles) {
cv::circle(img, cv::Point(circle[0], circle[1]), circle[2], cv::Scalar(0, 0, 255), 4);
cv::circle(img, cv::Point(circle[0], circle[1]), 5, cv::Scalar(0, 255, 0), cv::FILLED);
}
// --- Assuming there are only the two right circles left from here. --- //
// Draw some debug output in input image.
const cv::Point c1 = cv::Point(circles[0][0], circles[0][1]);
const cv::Point c2 = cv::Point(circles[1][0], circles[1][1]);
cv::line(img, c1, c2, cv::Scalar(255, 0, 0), 2);
// Calculate distance, and put in input image.
double dist = cv::norm(c1 - c2);
cv::putText(img, std::to_string(dist), cv::Point((c1.x + c2.x) / 2 + 20, (c1.y + c2.y) / 2 + 20), cv::FONT_HERSHEY_COMPLEX, 1.0, cv::Scalar(255, 0, 0));
The final output looks like this:
The intermediate image right before the HoughCircles operation looks like this:
In general, I'm not that skeptical about HoughCircles. You "just" have to pay attention to your preprocessing.
Hope that helps!
It's possible using Hough circle detection, but you should provide more images if you want a more stable detection. I just do denoising and go straight to circle detection. Using non-local means denoising is pretty good at preserving edges, which is in turn good for the Canny edge detector used inside the Hough circle algorithm.
My code is written in Python but can easily be translated into C++.
import cv2
from matplotlib import pyplot as plt

IM_PATH = 'your image path'
DS = 2  # downsample the image

orig = cv2.imread(IM_PATH, cv2.IMREAD_GRAYSCALE)
orig = cv2.resize(orig, (orig.shape[1] // DS, orig.shape[0] // DS))
img = cv2.fastNlMeansDenoising(orig, h=3, templateWindowSize=20 // DS + 1, searchWindowSize=40 // DS + 1)

plt.imshow(orig, cmap='gray')

circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=200 // DS, param1=40 // DS, param2=40 // DS, minRadius=210 // DS, maxRadius=270 // DS)
if circles is not None:
    for x, y, r in circles[0]:
        c = plt.Circle((x, y), r, fill=False, lw=1, ec='C1')
        plt.gca().add_patch(c)

plt.gcf().set_size_inches((12, 8))
plt.show()
Important
Doing a bit of image processing is only the first step in a good (and stable!) object detection. You have to leverage every detail and property that you can get your hands on and apply some statistics to improve your results. For example:
Use Yves' approach as an addition and filter all detected circles that do not intersect the joints.
Is one circle always underneath the other? Filter out horizontally aligned pairs.
Can you reduce the ROI (are the circles always in a specific area in your image or can they be everywhere)?
Are both circles always the same size? Filter out pairs with different sizes.
...
If you can use multiple metrics, you can apply a statistical model (e.g. majority voting or k-NN) to find the best pair of circles; a rough sketch of the voting idea follows below.
Again: always think of what you know about your object, the environment and its behavior and take advantage of that knowledge.
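To make the voting idea concrete, here is a rough sketch in C++ (to match the rest of this page, although the answer's code is Python). All the thresholds (radius difference, alignment tolerance, distance range) are hypothetical and must be tuned to your images:

#include <opencv2/opencv.hpp>
#include <cmath>
#include <utility>
#include <vector>

// Score every pair of detected circles (x, y, r) with simple heuristic
// votes and return the indices of the best pair, or (-1, -1) if none.
std::pair<int, int> bestCirclePair(const std::vector<cv::Vec3f>& circles)
{
    std::pair<int, int> best(-1, -1);
    int bestVotes = -1;
    for (size_t i = 0; i < circles.size(); ++i) {
        for (size_t j = i + 1; j < circles.size(); ++j) {
            int votes = 0;
            // Similar radii?
            if (std::abs(circles[i][2] - circles[j][2]) < 10.f) ++votes;
            // Roughly vertically aligned (one underneath the other)?
            if (std::abs(circles[i][0] - circles[j][0]) < 30.f) ++votes;
            // Plausible center-to-center distance?
            float d = std::hypot(circles[i][0] - circles[j][0],
                                 circles[i][1] - circles[j][1]);
            if (d > 100.f && d < 600.f) ++votes;
            if (votes > bestVotes) {
                bestVotes = votes;
                best = std::make_pair((int)i, (int)j);
            }
        }
    }
    return best;
}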

Triangle mask with OpenCV

I have this image.
I want to create a triangle mask to get only this zone,
but with the following code I get this result:
Moments mu = moments(red,true);
Point center;
center.x = mu.m10 / mu.m00;
center.y = mu.m01 / mu.m00;
circle(red, center, 2, Scalar(0, 0, 255));
cv::Size sz = red.size();
int imageWidth = sz.width;
int imageHeight = sz.height;
Mat mask3(red.size(), CV_8UC1, Scalar::all(0));
// Create Polygon from vertices
vector<Point> ptmask3(3);
ptmask3.push_back(Point(imageHeight-1, imageWidth-1));
ptmask3.push_back(Point(center.x, center.y));
ptmask3.push_back(Point(0, red.rows - 1));
vector<Point> pt;
approxPolyDP(ptmask3, pt, 1.0, true);
// Fill polygon white
fillConvexPoly(mask3, &pt[0], pt.size(), 255, 8, 0);
// Create new image for result storage
Mat hide3(red.size(), CV_8UC3);
// Cut out ROI and store it in imageDest
red.copyTo(hide3, mask3);
imshow("mask3", hide3);
Updated version (with the help of Dan Mašek)
Your triangle is wrong
This is because you're initializing the vector with size 3, then putting another three points into it, for a total of 6 points of which three have default values. Try this instead:
vector<Point> ptmask3;
Also, make sure that the coordinates of the points are correct. You'll want to have a point in the bottom left corner, but it doesn't seem like your current triangle has one like that.
Your image is gray
You need to initialize hide3 properly, like this:
cv::Mat hide3(img.size(), CV_8UC3, cv::Scalar(0));
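Putting both fixes together, a minimal corrected sketch could look like this. It assumes, as in the question, that red is the input image and center its centroid; the two bottom corners of the image are assumed as the remaining triangle vertices:

// Build the triangle mask from three explicit vertices.
cv::Mat mask3(red.size(), CV_8UC1, cv::Scalar::all(0));

std::vector<cv::Point> ptmask3;                           // start empty
ptmask3.push_back(cv::Point(red.cols - 1, red.rows - 1)); // bottom-right (x, y)
ptmask3.push_back(center);                                // centroid
ptmask3.push_back(cv::Point(0, red.rows - 1));            // bottom-left

// Fill the polygon white
cv::fillConvexPoly(mask3, ptmask3, cv::Scalar(255));

// Properly initialized (black) color image for the result
cv::Mat hide3(red.size(), CV_8UC3, cv::Scalar(0));
red.copyTo(hide3, mask3);
cv::imshow("mask3", hide3);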

Change the lightness of pixels outside of an area

I have one point at position (x, y) and two angles from this point. In the example below I drew two lines to demonstrate how it should look.
Now what I want is to change the lightness of all pixels outside these lines.
Here is the original image.
And here is an example of what I want.
How can I easily change these pixels with OpenCV (C++), given that I know the input image, the point, and the two angles? I know of many solutions, but I want the easiest one: how can I detect which pixels need to change and which do not?
One way would be to:
Make a binary mask the size of the original image, based on your point and angles (i.e., draw a filled polygon).
Make a clone of the original image. Apply the brightness change to the whole cloned image.
Copy the cloned image back to the original image based on the mask.
I wrote the code below following Zindarod's steps. Hope it helps someone.
Angles are in degrees.
void view(cv::Mat& frame, double angle_left, double angle_right, cv::Point center) {
    int length = 1500;
    cv::Point left_view;
    left_view.x = (int)round(center.x + length * cos(angle_left * (CV_PI / 180)));
    left_view.y = (int)round(center.y + length * sin(angle_left * (CV_PI / 180)));
    cv::Point right_view;
    right_view.x = (int)round(center.x + length * cos(angle_right * (CV_PI / 180)));
    right_view.y = (int)round(center.y + length * sin(angle_right * (CV_PI / 180)));
    cv::Point pts[3] = { center, left_view, right_view };
    // Scale factors: pixels outside the triangle keep hue and saturation,
    // but their value (lightness) is multiplied by 0.3.
    cv::Mat mask = cv::Mat(frame.size(), CV_32FC3, cv::Scalar(1.0, 1.0, 0.3));
    cv::fillConvexPoly(mask, pts, 3, cv::Scalar(1.0, 1.0, 1.0));
    cv::cvtColor(frame, frame, CV_BGR2HSV);
    frame.convertTo(frame, CV_32FC3);
    cv::multiply(frame, mask, frame);
    frame.convertTo(frame, CV_8UC3);
    cv::cvtColor(frame, frame, CV_HSV2BGR);
}
Given an origin point and two angles, you can calculate two unit vectors for your two lines; let these be unitA and unitB.
For each pixel of the image, do these steps:
1. Get a vector (vec) from the origin to the pixel.
2. Find the angle (ang) between vec and a reference vector (refVec).
3. If ang is greater than the angle between refVec and unitA, but smaller than the angle between refVec and unitB, recolor the pixel.
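A rough per-pixel sketch of this idea, assuming angle_left < angle_right with no wrap-around across ±180°, an 8-bit BGR image, and a hypothetical darkening factor of 0.3:

#include <opencv2/opencv.hpp>
#include <cmath>

// Darken every pixel whose angle from `origin` lies outside the wedge
// [angleLeftDeg, angleRightDeg]. Angles in degrees, image y axis points down.
void dimOutsideWedge(cv::Mat &img, cv::Point2f origin,
                     double angleLeftDeg, double angleRightDeg)
{
    const double lo = angleLeftDeg * CV_PI / 180.0;
    const double hi = angleRightDeg * CV_PI / 180.0;
    for (int y = 0; y < img.rows; ++y) {
        for (int x = 0; x < img.cols; ++x) {
            // Vector from the origin to this pixel.
            double ang = std::atan2(y - origin.y, x - origin.x);
            // Recolor pixels whose angle falls outside [lo, hi].
            if (ang < lo || ang > hi) {
                cv::Vec3b &px = img.at<cv::Vec3b>(y, x);
                px[0] = cv::saturate_cast<uchar>(px[0] * 0.3);
                px[1] = cv::saturate_cast<uchar>(px[1] * 0.3);
                px[2] = cv::saturate_cast<uchar>(px[2] * 0.3);
            }
        }
    }
}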

Creating a bounding box around a field of optical flow paths

I have used cv::calcOpticalFlowFarneback to calculate the optical flow in the current and previous frames of video with ofxOpenCv in openFrameworks.
I then draw the video with the optical flow field on top and then draw vectors showing the flow of motion in areas that are above a certain threshold.
What I want to do now is create a bounding box of those areas of motion and get the centroid and store that x,y position in a variable for tracking.
This is how I'm drawing my flow field, if that helps:
if (calculatedFlow) {
    ofSetColor(255, 255, 255);
    video.draw(0, 0);
    int w = gray1.width;
    int h = gray1.height;
    // 1. Input images + optical flow
    ofPushMatrix();
    ofScale(4, 4);
    // Optical flow
    float *flowXPixels = flowX.getPixelsAsFloats();
    float *flowYPixels = flowY.getPixelsAsFloats();
    ofSetColor(0, 0, 255);
    for (int y = 0; y < h; y += 5) {
        for (int x = 0; x < w; x += 5) {
            float fx = flowXPixels[x + w * y];
            float fy = flowYPixels[x + w * y];
            // Draw only long vectors
            if (fabs(fx) + fabs(fy) > .5) {
                ofDrawRectangle(x - 0.5, y - 0.5, 1, 1);
                ofDrawLine(x, y, x + fx, y + fy);
            }
        }
    }
}
For what you are asking, there is no simple answer. Here is a suggested solution. It involves multiple steps, but if your domain is simple enough, you could simplify this.
For each frame:
Calculate the flow as two images flow_x, flow_y, comparing the current frame with the previous frame using the Farneback method (you seem to be doing this in your code).
Translate the flow images into an HSV image, where the hue component of each pixel denotes the angle of the flow, atan2(flow_y, flow_x), and the value component of each pixel denotes the magnitude of the flow, sqrt(flow_x^2 + flow_y^2); a sketch of this step follows below.
In the above step, use your thresholding mechanism to suppress flow pixels (make them black) whose magnitude falls below a certain threshold.
Segment the HSV image based on color ranges. You could use a priori information about your domain, or you could take a histogram of the hue components and identify prominent ranges of hues to classify pixels. As a result of this step, you can assign a class to each pixel.
Separate the pixels belonging to each class into multiple images. All pixels belonging to segmented class 1 go to image 1, all pixels belonging to segmented class 2 go to image 2, etc. Now each segmented image contains the pixels of the HSV image in a particular color range.
Transform each segmented image into a black-and-white image and, using OpenCV's morphological operations, split it into multiple regions using connectivity (connected components).
Find the centroid of each connected component.
I found this reference to be helpful in this context.
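As a rough sketch of the flow-to-HSV step above, in plain OpenCV rather than the ofxOpenCv wrappers; the magnitude threshold is a hypothetical placeholder:

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: turn a 2-channel Farneback flow field into an HSV image
// (hue = flow angle, value = flow magnitude) and black out weak flow.
cv::Mat flowToHsv(const cv::Mat &flow, float magThreshold)
{
    std::vector<cv::Mat> xy;
    cv::split(flow, xy);                           // xy[0] = flow_x, xy[1] = flow_y

    cv::Mat mag, ang;
    cv::cartToPolar(xy[0], xy[1], mag, ang, true); // angle in degrees

    // Suppress flow pixels whose magnitude falls below the threshold.
    mag.setTo(0, mag < magThreshold);

    cv::normalize(mag, mag, 0, 255, cv::NORM_MINMAX);

    std::vector<cv::Mat> channels(3);
    ang.convertTo(channels[0], CV_8U, 0.5);        // hue: 0-360 deg -> 0-180
    channels[1] = cv::Mat(flow.size(), CV_8U, cv::Scalar(255)); // full saturation
    mag.convertTo(channels[2], CV_8U);             // value: flow magnitude

    cv::Mat hsv;
    cv::merge(channels, hsv);
    return hsv; // use cvtColor(hsv, bgr, COLOR_HSV2BGR) to visualize
}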
I resolved my problem by creating a new image from my flowX and flowY. This was done by adding flowX and flowY into a new CV FloatImage:
flowX +=flowY;
flowXY = flowX;
Then I was able to do contour finding on the pixels of the newly created image, and store the centroids of all the blobs of movement.
Like so:
contourFinder.findContours(mask, 10, 10000, 20, false);
// Storing the object centers with the contour finder.
vector<ofxCvBlob> &blobs = contourFinder.blobs;
int n = blobs.size(); // Get number of blobs
obj.resize(n);        // Resize obj array
for (int i = 0; i < n; i++) {
    obj[i] = blobs[i].centroid; // Fill obj array
}
I initially noticed that movement was only being tracked in one direction along the x and y axes because of negative values. I resolved this by taking the absolute value of the flow with abs() before splitting it.
Mat img1( gray1.getCvImage() ); //Create OpenCV images
Mat img2( gray2.getCvImage() );
Mat flow;
calcOpticalFlowFarneback( img1, img2, flow, 0.7, 3, 11, 5, 5, 1.1, 0 );
//Split flow into separate images
vector<Mat> flowPlanes;
Mat newFlow;
newFlow = abs(flow); //abs flow so values are absolute. Allows tracking in both directions.
split( newFlow, flowPlanes );
//Copy float planes to ofxCv images flowX and flowY
IplImage iplX( flowPlanes[0] );
flowX = &iplX;
IplImage iplY( flowPlanes[1] );
flowY = &iplY;

How to detect the Sun from the space sky in OpenCV?

I need to detect the Sun from the space sky.
These are examples of the input images:
I got these results after morphologic filtering (the open operation applied twice).
Here's the code for this processing:
// Color to Gray
cvCvtColor(image, gray, CV_RGB2GRAY);
// color threshold
cvThreshold(gray,gray,150,255,CV_THRESH_BINARY);
// Morphologic open for 2 times
cvMorphologyEx( gray, dst, NULL, CV_SHAPE_RECT, CV_MOP_OPEN, 2);
Isn't this too heavy processing for such a simple task? And how do I find the center of the Sun? If I look for white points, I'll also find the white points of the big Earth (top-left corner of the first example image).
Please advise me on what to do next to detect the Sun.
UPDATE 1:
Trying the algorithm of getting the centroid by the formula: {x, y} = {M10/M00, M01/M00}
CvMoments moments;
cvMoments(dst, &moments, 1);
double m00, m10, m01;
m00 = cvGetSpatialMoment(&moments, 0,0);
m10 = cvGetSpatialMoment(&moments, 1,0);
m01 = cvGetSpatialMoment(&moments, 0,1);
// calculating centroid
float centroid_x = m10/m00;
float centroid_y = m01/m00;
cvCircle(image,
         cvPoint(cvRound(centroid_x), cvRound(centroid_y)),
         50, CV_RGB(125, 125, 0), 4, 8, 0);
And where the Earth is in the photo, I got this result:
So the centroid is on the Earth. :(
UPDATE 2:
Trying cvHoughCircles:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* circles = cvHoughCircles(dst, storage, CV_HOUGH_GRADIENT, 12,
                                dst->width / 2, 255, 100, 0, 35);
if (circles->total > 0) {
    // getting the first found circle
    float* circle = (float*)cvGetSeqElem(circles, 0);
    // Drawing:
    // green center dot
    cvCircle(image, cvPoint(cvRound(circle[0]), cvRound(circle[1])),
             3, CV_RGB(0, 255, 0), -1, 8, 0);
    // wrapping red circle
    cvCircle(image, cvPoint(cvRound(circle[0]), cvRound(circle[1])),
             cvRound(circle[2]), CV_RGB(255, 0, 0), 3, 8, 0);
}
First example: bingo; but the second: no. ;(
I've tried different configurations of cvHoughCircles(), but couldn't find one that fits all my example photos.
UPDATE 3:
The matchTemplate approach worked for me (see mevatron's response). It worked on a large number of tests.
How about trying a simple matchTemplate approach? I used this template image:
And it detected 3 out of the 3 sun images I tried:
This should work due to the fact that circles (in your case the sun) are rotationally invariant, and since you are so far away from the sun it should be roughly scale invariant as well. So, template matching will work quite nicely here.
Finally, here is the code that I used to do this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
    /// Load image and template
    string inputName = "sun2.png";
    string outputName = "sun2_detect.png";
    Mat img = imread(inputName, 1);
    Mat templ = imread("sun_templ.png", 1);

    /// Create the result matrix (note: the Mat constructor takes rows first)
    int result_cols = img.cols - templ.cols + 1;
    int result_rows = img.rows - templ.rows + 1;
    Mat result(result_rows, result_cols, CV_32FC1);

    /// Do the matching and normalize
    matchTemplate(img, templ, result, CV_TM_CCOEFF);
    normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());

    Point maxLoc;
    minMaxLoc(result, NULL, NULL, NULL, &maxLoc);

    rectangle(img, maxLoc, Point(maxLoc.x + templ.cols, maxLoc.y + templ.rows), Scalar(0, 255, 0), 2);
    rectangle(result, maxLoc, Point(maxLoc.x + templ.cols, maxLoc.y + templ.rows), Scalar(0, 255, 0), 2);

    imshow("img", img);
    imshow("result", result);
    imwrite(outputName, img);

    waitKey(0);
    return 0;
}
Hope you find that helpful!
Color Segmentation Approach
Do a color segmentation on the images to identify objects on the black background. You may identify the Sun according to its area (given that this uniquely identifies it, i.e., it doesn't vary largely across images).
A more sophisticated approach could compute image moments, e.g., Hu moments of the objects. See this page for these features.
Use a classification algorithm of your choice to do the actual classification of the objects found. The simplest approach is to manually specify thresholds or value ranges that turn out to work for all (or most) of your object/image combinations.
You may compute the actual position from the raw moments, as for the circular Sun the position is equal to the center of mass:
Centroid: {x, y} = {M10/M00, M01/M00}
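As a rough sketch of this pipeline in the modern C++ API (the threshold 150 matches the question's code; the area bounds are hypothetical and need tuning):

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: threshold, find connected blobs, filter by area, and return the
// centroid of the first plausible candidate (or (-1, -1) if none).
cv::Point2d findSunCentroid(const cv::Mat &gray, double minArea, double maxArea)
{
    cv::Mat bin;
    cv::threshold(gray, bin, 150, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    for (const auto &c : contours) {
        double area = cv::contourArea(c);
        if (area < minArea || area > maxArea)
            continue; // reject noise and Earth-sized blobs
        cv::Moments m = cv::moments(c);
        // Centroid: {x, y} = {M10/M00, M01/M00}
        return cv::Point2d(m.m10 / m.m00, m.m01 / m.m00);
    }
    return cv::Point2d(-1, -1); // nothing matched
}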
Edge Map Approach
Another option would be a circle Hough transform on the edge map; this will hopefully return some candidate circles (by position and radius). You may select the Sun's circle according to the radius you expect (if you are lucky, there is at most one).
A simple addition to your code is to filter out objects based on their size. If you always expect the Earth to be much bigger than the Sun, or the Sun to have almost the same area in each picture, you can filter by area.
Try a blob detector for this task; a rough sketch follows below.
Note that it may be good to apply a morphological opening/closing instead of a simple erode or dilate, so your Sun will have almost the same area before and after processing.
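A minimal sketch of the blob-detector suggestion using cv::SimpleBlobDetector, assuming a grayscale frame; every parameter value here is hypothetical and needs tuning:

#include <opencv2/opencv.hpp>
#include <vector>

// Detect bright, roughly circular blobs; area limits filter out the
// much larger Earth and small noise.
std::vector<cv::KeyPoint> detectSunBlob(const cv::Mat &gray)
{
    cv::SimpleBlobDetector::Params params;
    params.filterByColor = true;
    params.blobColor = 255;       // bright blobs on a dark background
    params.filterByArea = true;
    params.minArea = 100;         // hypothetical bounds
    params.maxArea = 5000;
    params.filterByCircularity = true;
    params.minCircularity = 0.7f; // the Sun is roughly circular

    cv::Ptr<cv::SimpleBlobDetector> detector =
        cv::SimpleBlobDetector::create(params);
    std::vector<cv::KeyPoint> keypoints;
    detector->detect(gray, keypoints);
    return keypoints;
}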