Access pixel values inside a detected object (C++)

If I can detect a circle using the Canny edge detector, how can I access all the pixel values that are inside the circle?
void Canny(InputArray image, OutputArray edges, double threshold1, double threshold2, int apertureSize=3, bool L2gradient=false )
The output of this function gives me the edge values found by the edge detector, but what I want is all the values inside the circle.
Thanks in advance.
Edit:
Mat mask = Mat::zeros(canny_edge.rows, canny_edge.cols, CV_8UC1);
// draw the filled shape into 'mask' here, then:
Mat crop(mainImage.rows, mainImage.cols, CV_32FC1);
mainImage.copyTo(crop, mask); // 'mainImage' is the source image ('main' is not usable as a variable name)
for (int y = 0; y < mask.rows; y++)
    for (int x = 0; x < mask.cols; x++)
        if (mask.at<unsigned char>(y, x) > 0)
        {
            // pixel (x,y) is inside the masked region
        }

For a circle, as asked in the original question:
First you want to detect the circle (e.g. by using Hough circle detection). Once you've done that, you have a circle center and a radius. Have a look at http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
After that you have to test whether a pixel is inside the circle. One idea (and with OpenCV quite fast) is to draw a filled circle on a mask image, and then, for each pixel in the original image, test whether the mask pixel at the same coordinates is set; if it is, the pixel is inside the object. This works for any other drawable object too: draw it (filled) on the mask and test the mask values.
Assuming you have a circle center and a radius, and the size of your original image is image_height x image_width, try this:
cv::Mat mask = cv::Mat::zeros(image_height,image_width, CV_8U);
cv::circle(mask, center, radius, cv::Scalar(255), -1);
for (unsigned int y = 0; y < image_height; ++y)
    for (unsigned int x = 0; x < image_width; ++x)
        if (mask.at<unsigned char>(y, x) > 0)
        {
            // pixel (x,y) in the original image is inside the circle, so do whatever you want
        }
though it will be more efficient if you limit the mask region (circle center +/- radius in both dimensions) instead of looping over the whole image ;)

For circles you should use the Hough Circle Transform. From it you will get the centres and radii of circles in your image. A given pixel is inside a particular circle if its distance from the center is less than the radius of the circle.
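The distance test needs no mask at all; a minimal sketch of that check, comparing squared distances to avoid a square root (the function name is made up):

```cpp
// A pixel (x, y) is inside the circle when its squared distance from the
// center (cx, cy) is below the squared radius.
bool insideCircle(int x, int y, int cx, int cy, int radius)
{
    int dx = x - cx, dy = y - cy;
    return dx * dx + dy * dy < radius * radius;
}
```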
For a general shape, use findContours to get the outline of the shape; then you can use pointPolygonTest to determine whether points are inside that shape. There is a tutorial on it.

Related

Can't detect bounding rect of ID card

I want to detect the bounding rectangle of a German ID card within an image by using OpenCV.
This is what my code looks like:
capture >> frame;
cv::resize(frame, frame, cv::Size(512,256));
cv::Mat grayScaledFrame, blurredFrame, cannyFrame;
cv::cvtColor(frame, grayScaledFrame, cv::COLOR_BGR2GRAY);
cv::GaussianBlur(grayScaledFrame, blurredFrame, cv::Size(9,9), 1);
cv::Canny(blurredFrame, cannyFrame, 40, 70);
// CONTOURS
std::vector<std::vector<cv::Point>> contours;
cv::findContours(cannyFrame, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
// SORT: keep the contour with the largest area
double maxArea = 0;
std::vector<cv::Point> contour;
for (size_t i = 0; i < contours.size(); i++) {
    double thisArea = cv::contourArea(contours.at(i)); // contourArea returns a double
    if (thisArea > maxArea) {
        maxArea = thisArea;
        contour = contours.at(i);
    }
}
cv::Rect borderBox = cv::boundingRect(contour);
cv::rectangle(cannyFrame, borderBox, cv::Scalar{255, 32, 32}, 8);
cv::imshow("Webcam", cannyFrame);
The result looks like this:
[result image]
There are some rectangles detected but not the big one I'm interested in.
I've already tried different thresholds for Canny and also different kernel sizes for Gaussian Blur.
Best regards
First of all, as the environmental conditions (light, distance to the object, etc.) change, the parameters of the code have to change too, so it is necessary to standardize the environment.
To get this detection right, put the card at a fixed distance from the camera and calculate the areas of the detected rectangles.
When the card is at a known distance from the camera, you get an approximate reference value for the card's area. Then, when drawing a rectangle, only accept contours whose area falls within a specified tolerance of that reference.

Coordinate transform from ROI in original image

I have a little problem with some projection and geometry. I have an image where I detect a square. After the square detection, I crop the square from image. In the ROI I detect the point P(x,y) (see the image below).
My problem is that I know the coordinates of point P in the ROI, the coordinates of A, B, C, D, and the rotation of the ROI (RotatedRect::angle), but I want to get the coordinates of P in the original image. Any advice would help.
For ROI crop I have this code
vector<RotatedRect> rect(squares.size());
for (int i = 0; i < squares.size(); i++)
{
    rect[i] = minAreaRect(Mat(squares[i]));
    Mat M, rotated, cropped;
    float angle = rect[i].angle;
    Size rect_size = rect[i].size;
    if (rect[i].angle < -45)
    {
        angle += 90;
        swap(rect_size.width, rect_size.height);
    }
    M = getRotationMatrix2D(rect[i].center, angle, 1.0);
    warpAffine(cameraFeed, rotated, M, cameraFeed.size(), INTER_CUBIC);
    getRectSubPix(rotated, rect_size, rect[i].center, cropped);
    cropped.copyTo(SatelliteClass[i].m_matROIcropped);
    SatelliteClass[i].m_vecRect = rect[i];
}
It's basically a question of vector addition. Take the inverse of M, apply it to P (so you're rotating P back into the original frame), and then offset the result by the top-left corner of the rectangle.
There might be a way to do this within the API you're using instead of reinventing the wheel.

Find 4 specific corner pixels and use them with warp perspective

I'm playing around with OpenCV and I want to know how you would build a simple version of a perspective transform program. I have an image of a parallelogram, and each corner of it consists of a pixel with a specific color which appears nowhere else in the image. I want to iterate through all pixels and find these 4 pixels. Then I want to use them as corner points in a new image in order to warp the perspective of the original image. In the end I should have a zoomed-in square.
Point2f src[4]; // Is this the right datatype to use here?
int lineNumber = 0;
// iterating through the pixels
for (int y = 0; y < image.rows; y++)
{
    for (int x = 0; x < image.cols; x++)
    {
        Vec3b colour = image.at<Vec3b>(Point(x, y));
        if (colour.val[1] == 245 && colour.val[2] == 111 && colour.val[0] == 10) {
            src[lineNumber] = Point2f(x, y); // something like this I guess
            lineNumber++;
        }
    }
}
/* I also need to get the dst points for getPerspectiveTransform
and afterwards warpPerspective, how do I get those? Take the other
points, check the biggest distance somehow and use it as the maxlength to calculate
the rest? */
How should you use OpenCV in order to solve the problem? (I just guess I'm not doing it the "normal and clever way") Also how do I do the next step, which would be using more than one pixel as a "marker" and calculate the average point in the middle of multiple points. Is there something more efficient than running through each pixel?
Something like this basically:
Starting from an image with colored circles as markers, like:
Note that this is a PNG image, i.e. one with lossless compression, which preserves the actual colors. If you use a lossy compression like JPEG the colors will change a little, and you cannot segment them with an exact match, as done here.
You need to find the center of each marker.
Segment the (known) color, using inRange
Find all connected components with the given color, with findContours
Find the largest blob, here done with max_element with a lambda function, and distance. You can use a for loop for this.
Find the center of mass of the largest blob, here done with moments. You can use a loop also here, eventually.
Add the center to your source vertices.
Your destination vertices are just the four corners of the destination image.
You can then use getPerspectiveTransform and warpPerspective to find and apply the warping.
The resulting image is:
Code:
#include <opencv2/opencv.hpp>
#include <vector>
#include <algorithm>
using namespace std;
using namespace cv;
int main()
{
    // Load image
    Mat3b img = imread("path_to_image");

    // Create a black output image
    Mat3b out(300, 300, Vec3b(0, 0, 0));

    // The color of your markers, in order
    vector<Scalar> colors{ Scalar(0, 0, 255), Scalar(0, 255, 0), Scalar(255, 0, 0), Scalar(0, 255, 255) }; // red, green, blue, yellow

    vector<Point2f> src_vertices(colors.size());
    vector<Point2f> dst_vertices = { Point2f(0, 0), Point2f(0, out.rows - 1), Point2f(out.cols - 1, out.rows - 1), Point2f(out.cols - 1, 0) };

    for (int idx_color = 0; idx_color < colors.size(); ++idx_color)
    {
        // Detect color
        Mat1b mask;
        inRange(img, colors[idx_color], colors[idx_color], mask);

        // Find connected components
        vector<vector<Point>> contours;
        findContours(mask, contours, RETR_EXTERNAL, CHAIN_APPROX_NONE);

        // Find the largest one
        int idx_largest = distance(contours.begin(), max_element(contours.begin(), contours.end(), [](const vector<Point>& lhs, const vector<Point>& rhs) {
            return lhs.size() < rhs.size();
        }));

        // Find centroid of largest component
        Moments m = moments(contours[idx_largest]);
        Point2f center(m.m10 / m.m00, m.m01 / m.m00);

        // Found marker center, add to source vertices
        src_vertices[idx_color] = center;
    }

    // Find transformation
    Mat M = getPerspectiveTransform(src_vertices, dst_vertices);

    // Apply transformation
    warpPerspective(img, out, M, out.size());

    imshow("Image", img);
    imshow("Warped", out);
    waitKey();

    return 0;
}

Image copied in ROI doesn't follow camera c++. How to fix this?

I work on Windows 7 x64 with OpenCV and Visual Studio 2010, in C++.
I created a project in which I show my camera a rectangular area (call it squared_surface). This area is recognized by tracing a rectangle with findSquares() and drawSquares() from the OpenCV sample squares.cpp.
On this rectangle I create a ROI and there I copy an image (let's call it copied_image).
My problem is that when I rotate squared_surface (in front of the camera), copied_image does not follow it.
I think I need to use the functions getPerspectiveTransform() and warpPerspective(), but I do not know how. Can anyone help me?
Here's the code:
int main() {
    vector<vector<Point>> squares;
    cv::VideoCapture cap(0);

    for (;;) {
        cv::Mat image;
        cap >> image;
        findSquares(image, squares);

        for (size_t i = 0; i < squares.size(); i++) {
            Rect rectangle = boundingRect(Mat(squares[i]));
            if ((rectangle.width <= 630) && (rectangle.width >= 420) && (rectangle.height <= 490) && (rectangle.height >= 250)) {
                cv::Size dsize = Size(rectangle.width, rectangle.height);
                Mat img1 = imread("scacchiera.jpg");
                cv::resize(img1, img1, dsize, 0, 0, INTER_LINEAR);
                Rect roi(rectangle.x, rectangle.y, rectangle.width, rectangle.height);
                Mat imageRoi(image, roi);
                img1.copyTo(imageRoi);
            }
        }

        drawSquares(image, squares);
        imshow("camera", image);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
Thanks!
EDIT.
I was thinking of rotating copied_image so that it follows squared_surface, but I need to calculate the angle of rotation of the rectangle identified by the camera (drawn in red in the above images). Is there a way to calculate this angle?
Or how else can I make copied_image follow squared_surface when I rotate squared_surface?
Help me, please!
I think I found the bug. Rect rectangle = boundingRect(Mat(squares[i])); This is where the problem is. You are creating the variable rectangle as a bounding rectangle of the coordinates in squares[i]. So your code always tries to find out the bounding rectangle and not the actual rectangle.
Instead of using a bounding rectangle, try using a rotated rectangle. Here is how to use it: http://www710.univ-lyon1.fr/~eguillou/documentation/opencv2/classcv_1_1_rotated_rect.html
The rotated rectangle RotatedRect (const Point2f &_center, const Size2f &_size, float _angle) requires the center point location, floating point angle and the size. Since you have all the coordinates I think you can use basic math and trigonometry to calculate the center and the angle on how your rectangle should be rotated/orientated.
Let me know if this helps.

Drawing rects on certain pixels openCV

I'm trying to locate some regions of a frame; the frame is in the YCbCr color space, and I have to select those regions based on their Y values.
so I wrote this code:
Mat frame, yframe;
VideoCapture cap(1);
int key = 0;
double maxV, minV; // minMaxLoc expects double*, so these must be double
Point max, min;
while (key != 27) {
    cap >> frame;
    cvtColor(frame, yframe, CV_BGR2YCrCb); // converting to the YCbCr color space
    extractChannel(yframe, yframe, 0); // extracting the Y channel
    cv::minMaxLoc(yframe, &minV, &maxV, &min, &max);
    cv::threshold(yframe, yframe, (maxV - 10), maxV, CV_THRESH_TOZERO);
    /**
    Now I want to use:
    cv::rectangle()
    but I want to draw a rect around any pixel (see the picture below) that's higher than (maxV-10),
    and that during the streaming
    **/
    key = waitKey(1);
}
I drew this picture hoping that it helps to understand what I want to do.
Thanks for your help.
Once you have applied your threshold you will end up with a binary image containing a number of connected components, if you want to draw a rectangle around each component then you first need to detect those components.
The OpenCV function findContours does just that, pass it your binary image, and it will provide you with a vector of vectors of points which trace the boundary of each component in your image.
cv::Mat binaryImage;
std::vector<std::vector<cv::Point>> contours;
cv::findContours(binaryImage, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Then all you need to do is find the bounding rectangle of each of those sets of points and draw them to your output image.
for (int i = 0; i < contours.size(); ++i)
{
    cv::Rect r = cv::boundingRect(contours.at(i));
    cv::rectangle(outputImage, r, CV_RGB(255, 0, 0));
}
In short: find each of the connected components and draw its bounding box.