I'm trying to analyse some images that have a lot of noise around the outside, but a clear circular centre with a shape inside. The centre is the part I'm interested in, but the outside noise is affecting my binary thresholding of the image.
To ignore the noise, I'm trying to set up a circular mask of known centre position and radius whereby all pixels outside this circle are changed to black. I figure that everything inside the circle will now be easy to analyse with binary thresholding.
I'm just wondering if someone might be able to point me in the right direction for this sort of problem, please? I've had a look at this solution: How to black out everything outside a circle in OpenCV, but some of my constraints are different and I'm confused by the way the source images are loaded.
Thank you in advance!
//First load your source image, here loaded as grayscale
cv::Mat srcImage = cv::imread("sourceImage.jpg", cv::IMREAD_GRAYSCALE);
//Then define your mask image (single channel, all black)
cv::Mat mask = cv::Mat::zeros(srcImage.size(), srcImage.type());
//Define your destination image
cv::Mat dstImage = cv::Mat::zeros(srcImage.size(), srcImage.type());
//I assume you want to draw the circle at the center of your image, with a radius of 50
cv::circle(mask, cv::Point(mask.cols/2, mask.rows/2), 50, cv::Scalar(255), -1, 8, 0);
//Now you can copy your source image to destination image with masking
srcImage.copyTo(dstImage, mask);
Then do your further processing on your dstImage. (The original answer showed four images at this point: the source image, the grayscale input, the binary mask, and the final result after the masking operation.)
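From there, the binary thresholding the question asks about can run on the masked result; a minimal sketch (the threshold value of 128 is an assumption to tune):
// Hedged sketch: threshold the masked result. The threshold value (128)
// is an assumption; tune it for your images.
cv::Mat binary;
cv::threshold(dstImage, binary, 128, 255, cv::THRESH_BINARY);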
Since you are looking for a clear circular centre with a shape inside, you could use the Hough Transform to find that area; a careful selection of parameters will help you isolate it accurately.
A detailed tutorial is here:
http://docs.opencv.org/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html
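If the centre and radius aren't known in advance, a minimal sketch in the spirit of that tutorial might look like this (gray is assumed to be the grayscale input; all parameter values are assumptions to tune):
// Hedged sketch based on the tutorial above; parameter values are assumptions.
cv::GaussianBlur(gray, gray, cv::Size(9, 9), 2, 2);
std::vector<cv::Vec3f> circles;
cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, 1, gray.rows / 8, 100, 30, 0, 0);
// Each result is (centre x, centre y, radius), usable for placing the mask below.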
For setting pixels outside a region to black:
Create a mask image (initialised to all black):
cv::Mat mask = cv::Mat::zeros(img_src.size(), img_src.type());
Mark the points inside with white colour:
cv::circle(mask, center, radius, cv::Scalar(255, 255, 255), -1, 8, 0);
You can now use bitwise_and to get an output image containing only the pixels enclosed by the mask:
cv::bitwise_and(mask, img_src, output);
First of all: I'm new to OpenCV :-)
I want to perform a vignetting correction on a 24-bit RGB image. I used an area scan camera as a line camera and put together an image from 1780x2 px parts to get a complete image of 1780x3000 px. Because of the vignetting, I made a white reference picture of 1780x2 px to calculate a LUT (with the correction factors in it) for the vignetting removal. Here is my code idea:
Mat white = imread("WHITE_REF_2L.bmp", 0);
Mat lut(2, 1780, CV_8UC3, Scalar(0));
lut = 255 / white;
imwrite("lut_test.bmp", lut*white);
As I understand it, what the second-to-last line will (hopefully) do is divide 255 by every intensity value of every channel and store the result in the lut matrix.
I then want to use that lut to calculate the "real" (undistorted) intensity level of each pixel by multiplying every element of the source image with every element of the lut matrix.
Obviously it's not working the way I want it to; I get a memory exception.
Can anybody help me with that problem?
Edit: I'm using OpenCV 3.1.0, and I solved the problem like this:
// read white reference image
Mat white = imread("WHITE_REF_2L_D.bmp", IMREAD_COLOR);
white.convertTo(white, CV_32FC3);
// calculate LUT with vignetting correction factors
Mat vLUT(2, 1780, CV_32FC3, Scalar(0.0f));
divide(240.0f, white, vLUT);
Of course that's not optimal; I will read in more white references and calculate the mean value to improve the LUT.
Here's the 2-line white reference; you can see the shading at the image borders that I want to correct.
When I multiply vLUT with the white reference, I obviously get a homogeneous image as the result.
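For completeness, a sketch of how the 2-row vLUT might be applied to the full stitched image (this assumes the 1780x3000 image is assembled from stacked 2-row strips; file names are placeholders):
// Hedged sketch: apply the 2-row vLUT to every 2-row strip of the stitched
// image. Assumes the 1780x3000 image was assembled from 2-row parts.
Mat img = imread("stitched.bmp", IMREAD_COLOR);   // placeholder file name
img.convertTo(img, CV_32FC3);
for (int y = 0; y + 2 <= img.rows; y += 2)
{
    Mat strip = img.rowRange(y, y + 2);           // 2x1780 view, no copy
    multiply(strip, vLUT, strip);                 // element-wise correction
}
img.convertTo(img, CV_8UC3);                      // back to 24-bit
imwrite("corrected.bmp", img);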
Thanks, maybe this can help someone else ;)
Update: in the process of troubleshooting by trial and error, I had got my Mats mixed up. It's working now.
I was wondering if anyone can help troubleshoot my program. Here's some background: I'm in the early stages of writing a program that will detect parts along a conveyor belt for an automated pick-and-place robot. Right now I'm trying to differentiate an upside-down part from a right-side-up part. I figured I can do this by querying pixels at a certain radius and angle from the part's center point (i.e., if pixels 1, 2 and 3 are black, the part must be upside down).
Here is a screenshot of what I have so far:
Here is a screenshot of it sometimes working:
The goal here is: if the pixel is black, draw a black circle; if white, draw a green circle. As you can see, all the circles are black. I can occasionally get a green circle when the pixel I'm querying lies on the part edge, but not in the middle. To the right, you can see the b/w image that I am reading, named "filter".
Here is the code for one of the circles. radian is the orientation of the part, and radius is the distance from the part's center point for my pixel query.
// check1 is the coordinate for the 1st pixel query
Point check1 = Point(pos.x + radius * cos(radian), pos.y + radius * sin(radian));
Scalar intensity1 = filter.at<uchar>(check1); // pixel value should be 0-255
if (intensity1.val[0] > 50)
{
    circle(screenshot, check1, 8, CV_RGB(0, 255, 0), 2); // green circle
}
else
{
    circle(screenshot, check1, 8, CV_RGB(0, 0, 0), 2); // black circle
}
Here are the steps of how I created the filter Mat (a sketch follows the list):
Webcam stream ->
cvtColor (BGR2HSV) ->
GaussianBlur ->
inRange (output becomes CV_8UC1) ->
erode ->
dilate =
filter Mat (this is the Mat that I use findContours on, and also for the pixel querying).
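A minimal sketch of that pipeline (the blur kernel size and the inRange bounds below are placeholder assumptions, not the actual values):
// Hedged sketch of the pipeline above; the blur kernel size and the
// inRange bounds are placeholder assumptions.
VideoCapture cap(0); // webcam stream
Mat frame, hsv, filter;
cap >> frame; // BGR frame
cvtColor(frame, hsv, COLOR_BGR2HSV);
GaussianBlur(hsv, hsv, Size(5, 5), 0);
inRange(hsv, Scalar(0, 0, 0), Scalar(180, 255, 60), filter); // output is CV_8UC1
erode(filter, filter, Mat());
dilate(filter, filter, Mat());
// filter is now a single-channel 8-bit mask, valid for findContours
// and for at<uchar>() pixel queries.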
I've also tried making a copy of filter, and using findContours and pixel querying on separate Mats, with no luck. I'm not sure what is wrong. Any suggestions are appreciated. I have a feeling that I might be using the wrong Mat format.
I have done a stereo calibration and I got validPixROI1 and 2 (green border). Now I want to use StereoSGBM, but the ROIs from calibration (from stereoRectify) are not the same size. Does anyone know how to solve this?
Currently I do something like this:
Rect roiLeft(...);
Rect roiRight(...);
Mat cLeft(rLeft, roiLeft);
//Mat cRight(rRight, roiRight); // not same size...
Mat cRight(rRight, roiLeft);
stereoBM(cLeft,cRight, dst);
If I crop my images with that ROI, will the picture's middle point still be the same?
Here it works.
Why not run stereoBM on the (uncropped) calibrated images? Then you can use those ROIs afterwards to mask out the invalid bits of the result:
stereoBM(rLeft,rRight, disp);
//get intersection of both rois or use target image roi, if you know the target image
cv::Rect visibleRoi = roiLeft & roiRight;
cv::Mat cDisp(disp,visibleRoi);
Now you have no issues with different size inputs, or different centers and such.
Cheers
According to the wiki:
A point R at the intersection of the optical axis and the image plane. This point is referred to as the principal point or image center.
So I don't think the center will be the same.
Refer to this site. In one of the examples there, the principal point is (302.71656, 242.33386) for a 640x480 pixel camera, which shows that the principal point and the image center are not the same.
Run the block matcher on the uncropped rectified images and then use:
cv::getValidDisparityROI(roi1, roi2, minDisparity, numberOfDisparities, SADWindowSize);
That call returns a cv::Rect that will be a bounding box for all the valid pixels in the left image and the disparity map. The valid pixels are only those that both cameras can "see" (caveat on occluded edges).
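Putting it together, a short sketch (names follow the snippets above; disp is the disparity map from the block matcher):
// Hedged sketch: crop the disparity map to the valid region.
cv::Rect validRoi = cv::getValidDisparityROI(roi1, roi2, minDisparity,
                                             numberOfDisparities, SADWindowSize);
cv::Mat validDisp = disp(validRoi); // view containing only the valid pixels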
Once you have the disparity map the right image becomes useless.
Be aware that the ROIs returned from stereoRectify are just the valid pixels after the remap from the cameras' intrinsics.
OK, sorry for asking pretty much the same question again, but I've tried many methods and I still can't do what I'm trying to do, and I'm not even sure it's possible with OpenCV alone.
I have rotated an image and I want to copy it inside another image. The problem is that no matter how I crop the rotated image, it always gets copied into the second image with a non-rotated square around it, as can be seen in the image below. (Forget the white part, that's OK.) I just want to remove the striped part.
I believe my problem is with the ROI that I copy the image into, as this ROI is a Rect and not a RotatedRect, as can be seen in the code below.
cv::Rect roi(Pt1.x, Pt1.y, ImageAd.cols, ImageAd.rows);
ImageAd.copyTo(ImageABC(roi));
But I can't copyTo with a RotatedRect, as in the code below...
cv::RotatedRect roi(cent, sizeroi, angled);
ImageAd.copyTo(ImageABC(roi));
So, is there a way of doing what I want in OpenCV?
Thanks!
After using the mask-based method below, I get this image, which as you can see is cut off by the ROI that says where in the second image to copy my rotated image. Basically, now that I've masked the image, how can I select where to put this masked image in my second image? At the moment I use a Rect, but that won't work, as my image is no longer a rect but a rotated rect. Look at the code to see how I (wrongly) do it at the moment (it cuts off, and if I make the Rect bigger an exception is thrown).
cv::Rect roi(Pt1.x, Pt1.y, creditcardimg.cols, creditcardimg.rows);
creditcardimg.copyTo(imagetocopyto(roi),mask);
Instead of an ROI you can use a mask to copy:
First, create a mask using the rotated rect.
Then copy your source image to the destination image using this mask.
See the C++ code below.
Here is your rotated rect, which I calculated manually:
RotatedRect rRect = RotatedRect(Point2f(140,115),Size2f(115,80),192);
Create the mask using drawContours:
Point2f vertices[4];
rRect.points(vertices);
Mat mask(src.rows, src.cols, CV_8UC1, cv::Scalar(0));
vector< vector<Point> > co_ordinates;
co_ordinates.push_back(vector<Point>());
co_ordinates[0].push_back(vertices[0]);
co_ordinates[0].push_back(vertices[1]);
co_ordinates[0].push_back(vertices[2]);
co_ordinates[0].push_back(vertices[3]);
drawContours(mask, co_ordinates, 0, Scalar(255), CV_FILLED, 8);
Finally, copy the source to the destination using the above mask:
Mat dst;
src.copyTo(dst,mask);
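To avoid the cut-off mentioned in the asker's update, one option is to clamp the paste ROI to the destination bounds before the masked copy. A sketch under the assumptions that Pt1 has non-negative coordinates and that mask has the same size as creditcardimg (variable names taken from the question):
// Hedged sketch: clamp the paste ROI so it stays inside imagetocopyto.
// Assumes Pt1.x/Pt1.y >= 0 and mask.size() == creditcardimg.size().
cv::Rect roi(Pt1.x, Pt1.y, creditcardimg.cols, creditcardimg.rows);
roi &= cv::Rect(0, 0, imagetocopyto.cols, imagetocopyto.rows); // clip to bounds
cv::Rect sub(0, 0, roi.width, roi.height); // matching sub-region of source/mask
creditcardimg(sub).copyTo(imagetocopyto(roi), mask(sub));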
I have been trying to detect the iris region of an eye and then draw a circle around the detected area. Using the threshold function, I have managed to obtain a clear black-and-white eye image containing just the pupil, the upper eyelid line and the eyebrow.
Once this is achieved, HoughCircles is applied to detect whether any circles appear in the image. However, it never detects any circular regions. After reading up on HoughCircles, the documentation states that
the Hough gradient method works as follows:
First the image needs to be passed through an edge detection phase (in this case, cvCanny()).
I then added a Canny detector after the threshold function. This still produced zero detected circles. If I remove the threshold function, the eye image becomes busy with unnecessary lines; hence I included it.
cv::equalizeHist(gray, img);
medianBlur(img, img, 1);
IplImage img1 = img;
cvAddS(&img1, cvScalar(70,70,70), &img1);
//converting IplImage to cv::Mat
Mat imgg = cvarrToMat(&img1);
medianBlur(imgg, imgg, 1);
cv::threshold(imgg, imgg, 120, 255, CV_THRESH_BINARY);
cv::Canny(img, img, 0, 20);
medianBlur(imgg, imgg, 1);
vector<Vec3f> circles;
/// Apply the Hough Transform to find the circles
HoughCircles(imgg, circles, CV_HOUGH_GRADIENT, 1, imgg.rows/8, 100, 30, 1, 5);
How can I overcome this problem?
Would the Hough circle method work?
Is there a better solution for detecting the iris region?
Are the chosen parameters correct?
Also note that the image is obtained directly from the webcam.
Try using Daugman's integro-differential operator. It calculates the centres of the iris and pupil and draws an accurate circle on the iris and pupil boundaries. The MATLAB code is available here: iris boundary detection using Daugman's method. Since I'm not familiar with OpenCV, you could convert it.
The binary eye image contained three different aspects: the eyelashes, the eye and the eyebrow. The main aim is to get to the region of interest, which is the eye/iris, excluding the eyebrows and eyelashes. I followed these steps (a sketch of them follows the link below):
Step 1: Discard the upper half of the eye image, so we are left with the eyelashes, the eye region and small shadow regions.
Step 2: Find the contours.
Step 3: Find the largest contour, so that we have just the eye region.
Step 4: Use a bounding box to create a rectangle around the eye area:
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
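A minimal sketch of steps 1-4 (assuming the binary image is in a CV_8UC1 Mat named eye, and that at least one contour is found):
// Hedged sketch of steps 1-4; 'eye' is the binary eye image (CV_8UC1),
// and at least one contour is assumed to be found.
Mat lower = eye(Rect(0, eye.rows / 2, eye.cols, eye.rows / 2)); // step 1: keep the lower half
vector< vector<Point> > contours;
findContours(lower.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // step 2
int largest = 0;
for (size_t i = 1; i < contours.size(); ++i) // step 3: largest contour by area
    if (contourArea(contours[i]) > contourArea(contours[largest]))
        largest = (int)i;
Rect eyeBox = boundingRect(contours[largest]); // step 4: bounding box around the eye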
Now we have the region of interest. From this point, I am extracting these images and using a neural network to train the system to emulate the properties of a mouse. I'm currently learning about neural networks (link1) and how to use them in OpenCV.
The previous methods, which included detecting the iris point, creating an eye vector, tracking it and calculating the gaze on the screen, are time consuming. Also, there is light reflected on the iris, making it difficult to detect.