Drawing a line with thickness becomes white, OpenCV iOS issue - c++

I'm developing an iOS app using the OpenCV framework, and the problem occurs when I try to draw something on my output frame.
In the image below I'm trying to draw a line and a circle:
cv::circle(outputFrame, startingPoint, 50, cv::Scalar(250,0,250), -1); // filled magenta circle
cv::line(outputFrame, startingPoint, endpoint, cv::Scalar(0,250,0), 4); // green line, thickness 4
The problem occurs when I increase the thickness of the line: the line itself becomes white with a green contour (where green = cv::Scalar(0,250,0)).
Any help is appreciated

Check here
The reason you are getting a white fill with a green contour is that the circle and the thick line behave as a single contour; the interior doesn't get segmented separately.
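One other thing worth checking (my assumption, not something the answer above states): on iOS the output frame is usually 4-channel BGRA, and a 3-element cv::Scalar leaves the alpha component at 0, which can render as white. A minimal sketch with explicit alpha:

#include <opencv2/imgproc.hpp>

// Sketch under the assumption that outputFrame is CV_8UC4 (BGRA).
// The fourth Scalar component sets alpha to fully opaque.
void drawOverlay(cv::Mat& outputFrame, cv::Point startingPoint, cv::Point endpoint)
{
    cv::circle(outputFrame, startingPoint, 50, cv::Scalar(250, 0, 250, 255), -1);
    cv::line(outputFrame, startingPoint, endpoint, cv::Scalar(0, 250, 0, 255), 4);
}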

Related

SFML cursor seems broken in Linux

So, I have a cursor sprite:
I load this sprite with:
sf::Cursor cur;
sf::Image cur_default;
if (!cur_default.loadFromFile("assets/sprites/cursors/cursor_default.png"))
{
    std::cerr << "ERROR: unable to load file 'assets/sprites/cursors/cursor_default.png'\n";
    return -1;
}
// hotspot at the centre of the image
cur.loadFromPixels(cur_default.getPixelsPtr(), cur_default.getSize(), cur_default.getSize() / 2U);
When I was testing my code, I realized that the left side of the sprite was white and that the colors didn't show up (white/gray got replaced with blue). After resizing the image to 32x32 pixels, it seems as if there are random white lines in my cursor. Unfortunately, I could not take a screenshot of the behavior because it wasn't showing up. I'm aware that, according to the SFML wiki:
On Unix, the pixels are mapped into a monochrome bitmap: pixels with an alpha channel to 0 are transparent, black if the RGB channel are close to zero, and white otherwise.
However, it states Unix, not Unix-like, so I assumed that is not the case for Linux, but feel free to correct me if I'm wrong. I'm confused about why there are random white lines in my sprite and how to fix them (and also why the colors are not working correctly).
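For what it's worth, the quoted mapping can be simulated ahead of time to preview what X11 would make of the sprite. A minimal sketch; the brightness cutoff is my guess, since the docs only say "close to zero":

#include <SFML/Graphics/Image.hpp>

// Approximate the documented Unix monochrome mapping: alpha 0 -> transparent,
// near-black RGB -> black, everything else -> white.
// The cutoff (64) is an assumption; SFML does not document the exact value.
void previewMonochrome(sf::Image& img)
{
    const sf::Vector2u size = img.getSize();
    for (unsigned y = 0; y < size.y; ++y)
    {
        for (unsigned x = 0; x < size.x; ++x)
        {
            sf::Color c = img.getPixel(x, y);
            if (c.a == 0)
                continue; // stays transparent
            bool nearBlack = c.r < 64 && c.g < 64 && c.b < 64;
            img.setPixel(x, y, nearBlack ? sf::Color::Black : sf::Color::White);
        }
    }
}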

OpenCV, finding pixel value

Update: in the process of troubleshooting by trial and error, I found I had gotten my Mats mixed up. It's working now.
I was wondering if anyone can help me troubleshoot my program. Here's some background: I'm in the early stages of writing a program that will detect parts along a conveyor belt for an automated pick-and-place robot. Right now I'm trying to differentiate an upside-down part from a right-side-up part. I figured I can do this by querying pixels at a certain radius and angle from the part's center point (i.e., if pixels 1, 2, and 3 are black, the part must be upside down).
Here is a screenshot of what I have so far:
Here is a screenshot of it sometimes working:
The goal here is: if the pixel is black, draw a black circle; if white, draw a green circle. As you can see, all the circles are black. I can occasionally get a green circle when the pixel I'm querying lies on the part edge, but not in the middle. To the right, you can see the b/w image that I am reading, named "filter".
Here is the code for one of the circles. radian is the orientation of the part, and radius is the distance away from the part center point for my pixel query.
// check1 is the coordinate of the 1st pixel query
Point check1 = Point(pos.x + radius * cos(radian), pos.y + radius * sin(radian));
Scalar intensity1 = filter.at<uchar>(check1); // pixel value should be 0-255
if (intensity1.val[0] > 50)
{
    circle(screenshot, check1, 8, CV_RGB(0, 255, 0), 2); // green circle
}
else
{
    circle(screenshot, check1, 8, CV_RGB(0, 0, 0), 2); // black circle
}
Here are the steps of how I created the filter Mat (a sketch of this pipeline follows below):
1. Webcam stream
2. cvtColor (BGR2HSV)
3. GaussianBlur
4. inRange (the output becomes 8U, I think)
5. erode
6. dilate
= filter Mat (this is the Mat that I use findContours on, and also for the pixel querying)
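For reference, a minimal sketch of that pipeline; the inRange bounds and kernel size are placeholders, not the actual values used above:

#include <opencv2/imgproc.hpp>

// Sketch of the filter pipeline described in the question.
cv::Mat makeFilter(const cv::Mat& frame)
{
    cv::Mat hsv, blurred, filter;
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::GaussianBlur(hsv, blurred, cv::Size(5, 5), 0);
    // Placeholder HSV bounds; inRange outputs a single-channel CV_8U mask
    cv::inRange(blurred, cv::Scalar(0, 0, 200), cv::Scalar(180, 30, 255), filter);
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3));
    cv::erode(filter, filter, kernel);
    cv::dilate(filter, filter, kernel);
    CV_Assert(filter.type() == CV_8UC1); // at<uchar> is only valid on this type
    return filter;
}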
I've also tried making a copy of filter and running findContours and the pixel querying on separate Mats, with no luck. I'm not sure what is wrong. Any suggestions are appreciated. I have a feeling that I might be using the incorrect Mat format.

Adaptive Bilateral Filter to preserve edges

I need to be able to detect the edges of a card. Currently it works when the background is non-disruptive, and best when the background is contrasting, but it still works pretty well on a non-contrasting background.
The problem occurs when the card is on a disruptive background: the bilateral filter lets in too much noise, which causes inaccurate edge detection.
Here is the code I am using:
bilateralFilter(imgGray, detectedEdges, 0, 175, 3, 0);        // edge-preserving smoothing
Canny(detectedEdges, detectedEdges, 20, 65, 3);               // edge map
dilate(detectedEdges, detectedEdges, Mat::ones(3,3,CV_8UC1)); // thicken the edges
Here imgGray is the grayscale version of the original image.
Here are some tests on a disruptive background and the results (contact info distorted in all images):
Colored card:
Result:
And here is a white card:
Results:
Can anyone tell me how I can preserve the edges of the card, no matter the background color, while removing the noise?
Find the edges using Canny, which you are already doing; then find the contours in the image and pick a suitable rectangle using a bounding box, applying some thresholds on the occupancy and the dimensions of the rectangle. This should narrow things down to your rectangle, i.e. the card edges, which you can take as an ROI to work on further.
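A minimal sketch of that approach; the occupancy and size thresholds are placeholder values of mine, not part of the answer:

#include <opencv2/imgproc.hpp>
#include <vector>

// Pick the contour whose bounding box best fits a card-like rectangle.
cv::Rect findCardROI(const cv::Mat& edges)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(edges.clone(), contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    cv::Rect best;
    for (const auto& c : contours)
    {
        cv::Rect box = cv::boundingRect(c);
        double occupancy = cv::contourArea(c) / (double)box.area();
        // Placeholder thresholds: reasonably filled box, card-sized, biggest wins
        if (occupancy > 0.5 && box.width > 100 && box.height > 60 &&
            box.area() > best.area())
            best = box;
    }
    return best; // empty Rect if nothing qualified
}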

How to detect and draw a circle around the iris region of an eye?

I have been trying to detect the iris region of an eye and thereafter draw a circle around the detected area. I have managed to obtain a clear black and white eye image containing just the pupil, upper eyelid line and eyebrow using threshold function.
Once this is achieved, HoughCircles is applied to detect whether there are circles in the image. However, it never detects any circular regions. After reading up on HoughCircles, I found that it states that
the Hough gradient method works as follows:
First the image needs to be passed through an edge detection phase (in this case, cvCanny()).
I then added a Canny detector after the threshold function. This still produced zero circles detected. If I remove the threshold function, the eye image becomes busy with unnecessary lines, hence I included it.
cv::equalizeHist(gray, img);
medianBlur(img, img, 1);
IplImage img1 = img;
cvAddS(&img1, cvScalar(70,70,70), &img1); // brighten the image
// converting IplImage to cv::Mat
Mat imgg = cvarrToMat(&img1);
medianBlur(imgg, imgg, 1);
cv::threshold(imgg, imgg, 120, 255, CV_THRESH_BINARY);
cv::Canny(img, img, 0, 20); // note: this runs on img, while HoughCircles below reads imgg
medianBlur(imgg, imgg, 1);
vector<Vec3f> circles;
/// Apply the Hough Transform to find the circles
/// (dp=1, minDist=rows/8, Canny high=100, accumulator=30, radius 1-5 px)
HoughCircles(imgg, circles, CV_HOUGH_GRADIENT, 1, imgg.rows/8, 100, 30, 1, 5);
How can I overcome this problem?
Would the Hough circle method work?
Is there a better solution for detecting the iris region?
Are the parameters chosen correctly?
Also note that the image is directly obtained from the webcam.
Try using Daugman's integro-differential operator. It calculates the centres of the iris and pupil and draws an accurate circle on the iris and pupil boundaries. The MATLAB code is available here: iris boundary detection using Daugman's method. Since I'm not familiar with OpenCV, you could convert it yourself.
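Since the linked code is MATLAB, here is a rough C++/OpenCV sketch of the operator's core idea, simplified (coarse sampling, no Gaussian smoothing of the radial derivative); it assumes a CV_8UC1 grayscale image, and all parameter values are assumptions:

#include <opencv2/imgproc.hpp>
#include <cmath>

// Mean intensity along a circle of radius r centred at (cx, cy).
static double circleMean(const cv::Mat& gray, int cx, int cy, int r)
{
    double sum = 0;
    int n = 0;
    for (int a = 0; a < 360; a += 5)
    {
        int x = cx + (int)std::lround(r * std::cos(a * CV_PI / 180.0));
        int y = cy + (int)std::lround(r * std::sin(a * CV_PI / 180.0));
        if (x >= 0 && y >= 0 && x < gray.cols && y < gray.rows)
        {
            sum += gray.at<uchar>(y, x);
            ++n;
        }
    }
    return n ? sum / n : 0;
}

// Coarse search for the (centre, radius) maximising the radial derivative
// of the circular mean -- the core of Daugman's operator.
cv::Vec3i daugmanSearch(const cv::Mat& gray, int rMin, int rMax, int step = 4)
{
    cv::Vec3i best(0, 0, 0);
    double bestScore = -1;
    for (int cy = rMin; cy < gray.rows - rMin; cy += step)
        for (int cx = rMin; cx < gray.cols - rMin; cx += step)
        {
            double prev = circleMean(gray, cx, cy, rMin);
            for (int r = rMin + 1; r <= rMax; ++r)
            {
                double cur = circleMean(gray, cx, cy, r);
                double score = std::fabs(cur - prev); // radial derivative
                if (score > bestScore)
                {
                    bestScore = score;
                    best = cv::Vec3i(cx, cy, r);
                }
                prev = cur;
            }
        }
    return best; // (cx, cy, r) of the strongest circular boundary
}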
The binary eye image contained three different aspects: the eyelashes, the eye and the eyebrow. The main aim is to get to the region of interest, which is the eye/iris, excluding the eyebrows and eyelashes. I followed these steps:
Step 1: Discard the upper half of the eye image, so we are left with the eyelashes, the eye region and small shadow regions.
Step 2: Find the contours.
Step 3: Find the largest contour, so that we have just the eye region.
Step 4: Use a bounding box to create a rectangle around the eye area (a sketch follows below):
http://docs.opencv.org/doc/tutorials/imgproc/shapedescriptors/bounding_rects_circles/bounding_rects_circles.html
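A minimal sketch of steps 1-4, assuming binaryEye is the thresholded single-channel eye image (the variable names are mine):

#include <opencv2/imgproc.hpp>
#include <vector>

cv::Rect eyeRegion(const cv::Mat& binaryEye)
{
    // Step 1: discard the upper half of the image (eyebrow area)
    cv::Mat lower = binaryEye(cv::Rect(0, binaryEye.rows / 2,
                                       binaryEye.cols, binaryEye.rows / 2));

    // Step 2: find the contours
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(lower.clone(), contours, cv::RETR_EXTERNAL,
                     cv::CHAIN_APPROX_SIMPLE);

    // Step 3: keep the largest contour (the eye region)
    int largest = -1;
    double maxArea = 0;
    for (int i = 0; i < (int)contours.size(); ++i)
    {
        double area = cv::contourArea(contours[i]);
        if (area > maxArea) { maxArea = area; largest = i; }
    }
    if (largest < 0)
        return cv::Rect();

    // Step 4: bounding box around the eye area, shifted back to
    // full-image coordinates
    cv::Rect box = cv::boundingRect(contours[largest]);
    box.y += binaryEye.rows / 2;
    return box;
}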
Now we have the region of interest. From this point I am extracting these images and using a neural network to train the system to emulate the properties of a mouse. I'm currently learning about neural networks (link1) and how to use them in OpenCV.
The previous methods, which included detecting the iris point, creating an eye vector, tracking it and calculating the gaze on the screen, were time consuming. Also, there is light reflected on the iris, which makes it difficult to detect.

How to cut faces after detection

I have been working on a college project using OpenCV. I made a simple program which detects faces by passing frames captured by a webcam to a function which detects faces.
On detection it draws black boxes on the faces. However, my project does not end here: I would like to be able to clip out the detected faces as soon as possible, save them to an image, and then apply different image processing techniques [as per my need]. If this is too problematic, I could use a simple image instead of frames captured by a webcam.
I am just clueless about how to go about clipping out the faces that get detected.
For the C++ version you can check this tutorial from the OpenCV documentation.
In the function detectAndDisplay you can see the line
Mat faceROI = frame_gray( faces[i] );
where faceROI is the clipped face, and you can save it to a file with the imwrite function:
imwrite("face.jpg", faceROI);
http://nashruddin.com/OpenCV_Region_of_Interest_(ROI)
Check this link; you can crop the image using the dimensions of the black box, resize it, and save it as a new file.
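A hedged sketch of that suggestion (the box would come from your detector's output, and the target size is arbitrary):

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

// Crop the detected box out of the frame, resize it, and save it.
void cropAndSave(const cv::Mat& frame, const cv::Rect& box)
{
    cv::Mat face = frame(box).clone();          // crop the ROI
    cv::resize(face, face, cv::Size(128, 128)); // resize (size is arbitrary)
    cv::imwrite("face.jpg", face);              // save as a new file
}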
Could you grab the frame and crop the photo with the X,Y coordinates of each corner?