Pixels at arrow tip missing when using antialiasing - c++

I am trying to draw an arrow with OpenCV 3.2:
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
int main()
{
Mat image(480, 640, CV_8UC3, Scalar(255, 255, 255)); //White background
Point from(320, 240); //Middle
Point to(639, 240); //Right border
arrowedLine(image, from, to, Vec3b(0, 0, 0), 1, LINE_AA, 0, 0.1);
imshow("Arrow", image);
waitKey(0);
return 0;
}
An arrow is drawn, but at the tip some pixels are missing:
To be more precise, two columns of pixels are not colored correctly (zoomed):
If I disable antialiasing, i.e., if I use
arrowedLine(image, from, to, Vec3b(0, 0, 0), 1, LINE_8, 0, 0.1);
instead (note the LINE_8 instead of LINE_AA), the pixels are there, albeit without antialiasing:
I am aware that antialiasing might rely on neighboring pixels, but it seems strange that pixels are not drawn at all at the borders instead of being drawn without antialiasing. Is there a workaround for this issue?
Increasing the X coordinate (e.g., to 640 or 641) makes the problem worse, i.e., more of the arrowhead pixels disappear, while the tip still lacks nearly two complete pixel columns.
Extending and cropping the image would solve the neighboring pixels issue, but in my original use case, where the problem appeared, I cannot enlarge my image, i.e., its size must remain constant.

After a quick review, I've found that OpenCV draws AA lines using a Gaussian filter, which contracts the final image.
As I suggested in the comments, you can implement your own function for the AA mode (calling the original one when AA is disabled), extending the points manually (see the code below to get the idea).
Another option may be to increase the line width when using AA.
You may also simulate OpenCV's AA effect on the final image (it may be slower, but it is helpful if you have many arrows). I'm not an OpenCV expert, so I'll write a general scheme:
// Filter radius; the higher, the stronger the smoothing
const int kRadius = 3;
// The image is extended so that pixels near the border still have neighbors to blur with
Mat blurred(480 + kRadius * 2, 640 + kRadius * 2, CV_8UC3, Scalar(255, 255, 255));
// Points moved according to the filter radius (needs testing, but that's the idea)
Point from(320 + kRadius, 240 + kRadius);
Point to(639 + kRadius * 2, 240 + kRadius);
// Extended non-AA arrow
arrowedLine(blurred, from, to, Scalar(0, 0, 0), 1, LINE_8, 0, 0.1);
// Simulate AA (kernel size must be odd; sigma 0 lets OpenCV derive it from the kernel size)
GaussianBlur(blurred, blurred, Size(kRadius, kRadius), 0);
// Crop the image (be careful, this does not copy the data)
Mat image = blurred(Rect(kRadius, kRadius, 640, 480));
Another option may be to draw the arrow in an image twice as large and then scale it down with a good smoothing filter, as in the sketch below.
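A minimal sketch of that supersampling idea (assuming the same 640x480 canvas as in the question; coordinates and thickness are simply doubled):
Mat big(480 * 2, 640 * 2, CV_8UC3, Scalar(255, 255, 255));
arrowedLine(big, Point(320 * 2, 240 * 2), Point(639 * 2, 240 * 2),
            Scalar(0, 0, 0), 2, LINE_8, 0, 0.1); // doubled coordinates and thickness
Mat image;
resize(big, image, Size(640, 480), 0, 0, INTER_AREA); // INTER_AREA smooths nicely when downscaling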
Obviously, the last two options will only work if you don't have any previous data on the image. If you do, then use a transparent image for temporary drawing and overlay it at the end.

Related

Excluding or skipping contours in the corners of an image

I have a camera under glass with IR light to detect objects. I can find the contours and draw them using the following code (I just found some examples online and modified them to my needs, so I am not a master at all!).
using namespace cv;
cvtColor(mat, mat, COLOR_BGR2GRAY);
blur(mat, mat, Size(3,3));
erode(mat, mat, Mat(), Point(-1,-1), 2);
dilate(mat, mat, Mat(), Point(-1,-1), 2);
Canny(mat, mat, 100, 200);
auto contours = std::vector<std::vector<Point>>();
auto hierarchy = std::vector<Vec4i>();
findContours(mat, contours, hierarchy, CV_RETR_TREE,
CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
Mat drawing = Mat::zeros(mat.size(), CV_8UC3);
RNG rng(12345); // random colors for the contours (rng was missing from the snippet)
for (int i = 0; i < contours.size(); i++) {
Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255),
rng.uniform(0, 255));
drawContours(drawing, contours, i, color, 2, 8, hierarchy, 0, Point());
}
putText(mat,
(QString("Blobs: %1").arg(contours.size())).toStdString(),
Point(25,175), cv::FONT_HERSHEY_PLAIN, 10, CV_RGB(0, 0, 255), 2);
This code results in a nice finding of the contours that I am quite happy with. Except the fact that my IR light somehow makes artifacts at the corners and bottom of the image.
You can see that I have used GIMP to highlight the areas that I want to ignore while searching for contours. Under the gray shade you can see the white pixels that my original code detects as contours. These areas are problematic, and I want to exclude them from either the contour search or the contour drawing (whichever is easier!).
I was thinking of cropping the image to get the ROI, but the crop is a rectangle, while I could (for example) have things to detect exactly at the leftmost area.
I think there should be some data in the contour that tells me where the pixels are, but I could not figure it out yet...
The easiest way would be to simply crop the image. Areas of the image are known as ROIs in OpenCV, which stands for Region of Interest.
So, you could simply say
cv::Mat image_roi = image(cv::Rect(x, y, w, h));
This basically makes a rectangular crop, with the top left corner at x,y, width w and height h.
Now, you might not want to reduce the size of the image. The next easiest way to remove the artifacts is to set the borders to 0, using ROIs, of course:
image(cv::Rect(x, y, w, h)).setTo(cv::Scalar(0, 0, 0));
This sets a rectangular region to black. You then have to define the 4 rectangular regions on the borders of your image that you want dark.
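A minimal sketch, assuming a 640x480 image and a hypothetical border width of 20 px (tune both to your setup):
const int b = 20; // border width to blank out
image(cv::Rect(0, 0, image.cols, b)).setTo(cv::Scalar(0, 0, 0));              // top
image(cv::Rect(0, image.rows - b, image.cols, b)).setTo(cv::Scalar(0, 0, 0)); // bottom
image(cv::Rect(0, 0, b, image.rows)).setTo(cv::Scalar(0, 0, 0));              // left
image(cv::Rect(image.cols - b, 0, b, image.rows)).setTo(cv::Scalar(0, 0, 0)); // right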
Note that all of the above is based on manual tuning and some experimentation, and it would work provided that your system is static.
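Alternatively, closer to your idea of using the data in the contours themselves, you could keep the image intact and instead drop the contours whose bounding box lies entirely inside a known bad region. This is only a rough sketch; the exclusion rectangles are hypothetical and std::remove_if needs <algorithm>:
std::vector<cv::Rect> excluded = { cv::Rect(0, 440, 640, 40),   // bottom strip (illustrative values)
                                   cv::Rect(600, 0, 40, 40) };  // a corner
contours.erase(std::remove_if(contours.begin(), contours.end(),
    [&](const std::vector<cv::Point>& c) {
        cv::Rect box = cv::boundingRect(c);
        for (const cv::Rect& ex : excluded)
            if ((box & ex) == box) // bounding box completely inside an excluded area
                return true;
        return false;
    }), contours.end());
Note that after erasing, the hierarchy vector no longer matches the remaining contours, so either skip the hierarchy in drawContours or filter by index instead.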

How to calculate the distance between two circles in an image with OpenCV

image with two circles
I have an image that includes two fibers (appearing as two circles in the image). How can I calculate the distance between the two fibers?
I find it hard to detect the position of the fibers. I have tried the HoughCircles function, but the parameters are hard to optimize and it cannot locate the circles precisely most of the time. Should I subtract the background first, or are there any other methods? Many thanks!
Unfortunately, you haven't shown your preprocessing steps. In my approach, I'll do the following:
Convert input image to grayscale (see cvtColor).
Median blurring, which maintains the "edges" (see medianBlur).
Adaptive thresholding (see adaptiveThreshold).
Morphological opening to get rid of small noise (see morphologyEx).
Find circles by HoughCircles.
Not done here: Possible refinements of the found circles. Exclude too small or too large circles. Use all prior information you have on that! For example, how large can the circles be at all?
Here's my whole code:
// Read image.
cv::Mat img = cv::imread("images/i7aJJ.jpg", cv::IMREAD_COLOR);
// Convert to grayscale for processing.
cv::Mat blk;
cv::cvtColor(img, blk, cv::COLOR_BGR2GRAY);
// Median blurring to improve following thresholding.
cv::medianBlur(blk, blk, 11);
// Adaptive thresholding.
cv::adaptiveThreshold(blk, blk, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C, cv::THRESH_BINARY, 51, -2);
// Morphological opening to get rid of small noise.
cv::morphologyEx(blk, blk, cv::MORPH_OPEN, cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3)));
// Find circles using Hough transform.
std::vector<cv::Vec4f> circles;
cv::HoughCircles(blk, circles, cv::HOUGH_GRADIENT, 1.0, 300, 50, 25, 100);
// TODO: Refinement of found circles, if there are more than two.
// For example, calculate areas: Neglect too small or too large areas.
// Compare all areas, and keep the two with nearly matching areas and
// suitable areas.
// Draw circles in input image.
for (cv::Vec4f& circle : circles) {
cv::circle(img, cv::Point(circle[0], circle[1]), circle[2], cv::Scalar(0, 0, 255), 4);
cv::circle(img, cv::Point(circle[0], circle[1]), 5, cv::Scalar(0, 255, 0), cv::FILLED);
}
// --- Assuming there are only the two right circles left from here. --- //
// Draw some debug output in input image.
const cv::Point c1 = cv::Point(circles[0][0], circles[0][1]);
const cv::Point c2 = cv::Point(circles[1][0], circles[1][1]);
cv::line(img, c1, c2, cv::Scalar(255, 0, 0), 2);
// Calculate distance, and put in input image.
double dist = cv::norm(c1 - c2);
cv::putText(img, std::to_string(dist), cv::Point((c1.x + c2.x) / 2 + 20, (c1.y + c2.y) / 2 + 20), cv::FONT_HERSHEY_COMPLEX, 1.0, cv::Scalar(255, 0, 0));
The final output looks like this:
The intermediate image right before the HoughCircles operation looks like this:
In general, I'm not that skeptical about HoughCircles. You "just" have to pay attention to your preprocessing.
Hope that helps!
It's possible using Hough circle detection, but you should provide more images if you want a more stable detection. I just denoise and go straight to circle detection. Non-local means denoising is pretty good at preserving edges, which in turn is good for the Canny edge detector used inside the Hough circle algorithm.
My code is written in Python but can easily be translated into C++.
import cv2
from matplotlib import pyplot as plt
IM_PATH = 'your image path'
DS = 2 # downsample the image
orig = cv2.imread(IM_PATH, cv2.IMREAD_GRAYSCALE)
orig = cv2.resize(orig, (orig.shape[1] // DS, orig.shape[0] // DS))
img = cv2.fastNlMeansDenoising(orig, h=3, templateWindowSize=20 // DS + 1, searchWindowSize=40 // DS + 1)
plt.imshow(orig, cmap='gray')
circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=200 // DS, param1=40 // DS, param2=40 // DS, minRadius=210 // DS, maxRadius=270 // DS)
if circles is not None:
for x, y, r in circles[0]:
c = plt.Circle((x, y), r, fill=False, lw=1, ec='C1')
plt.gca().add_patch(c)
plt.gcf().set_size_inches((12, 8))
plt.show()
Important
Doing a bit of image processing is only the first step in a good (and stable!) object detection. You have to leverage every detail and property that you can get your hands on and apply some statistics to improve your results. For example:
Use Yves' approach as an addition and filter all detected circles that do not intersect the joints.
Is one circle always underneath the other? Filter out horizontally aligned pairs.
Can you reduce the ROI (are the circles always in a specific area in your image or can they be everywhere)?
Are both circles always the same size? Filter out pairs with different sizes.
...
If you can use multiple metrics, you can apply a statistical model (e.g., majority voting or kNN) to find the best pair of circles, as in the sketch below.
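For example, a rough C++ sketch of the size filter (assuming the std::vector<cv::Vec4f> of circles from the first answer; purely illustrative, needs <cmath> and <limits>):
// Keep the pair of detected circles with the most similar radii
std::pair<int, int> best(-1, -1);
float bestDiff = std::numeric_limits<float>::max();
for (size_t i = 0; i < circles.size(); ++i)
    for (size_t j = i + 1; j < circles.size(); ++j) {
        float diff = std::abs(circles[i][2] - circles[j][2]); // radius difference
        if (diff < bestDiff) { bestDiff = diff; best = { (int)i, (int)j }; }
    }
// best.first and best.second now index the two most size-consistent circles (if any pair exists)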
Again: always think of what you know about your object, the environment and its behavior and take advantage of that knowledge.

Compare image edges with margin in OpenCV

I have two almost identical images, with the difference that the shapes in the second image are slightly different: most of the time smaller, but they can also be larger. Also, the number of shapes in one image can range from ~10 to >100, and the shapes can get relatively close to each other.
It would look something like this (note: both images would not be transparent):
The black triangle is image 1, the grey triangle is image 2.
Now I want to add a predefined margin (3 px here, on both sides of the contour) to the edges of image 1 and test whether the edges of the second image are in "the same" range as in the first image. If not, display that visually:
Top left: Small difference between the two images (visualized by red outline)
Bottom right: "Same" edge -> No difference
How can I best accomplish this?
I'm using OpenCV with C++
In case the shapes are at the same positions in both images and you just need the markers on an image without additional information, this simple trick could do it.
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
Mat img1 = imread("D:/1.png");
Mat img2 = imread("D:/2.png");
Mat diff;
absdiff(img1, img2, diff);
cv::threshold(diff, diff, 128, 255, THRESH_BINARY);
Mat markers;
int minRadiusDiff = 2;
erode(diff, markers, Mat(), cv::Point(-1, -1), minRadiusDiff / 2);
imwrite("D:/out.png", markers);
}
Here are some example images:
The triangle becomes much bigger, the wobbly thing becomes much smaller, and the quad only shrinks slightly.
So we would like to have the triangle and the wobble marked, but not the quad.
And that is exactly our result.
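If you also want the red outline asked for in the question, one possible follow-up (not part of the code above) is to paint the marked pixels onto the first image:
Mat mask;
cvtColor(markers, mask, COLOR_BGR2GRAY); // markers is still 3-channel after absdiff
img1.setTo(Scalar(0, 0, 255), mask);     // every non-zero marker pixel is drawn in red on image 1
imwrite("D:/overlay.png", img1);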

OpenCV+cvBlobsLib: blobs come out "stretched" on the x-axis

Making the usual blob tracker with OpenCV and cvBlobsLib, I've come across this problem and it seems no one else has had it, which makes me sad. I get the RGB/BGR frame, choose the color to isolate, threshold it into b/w, find the blobs and add the bounding rectangle on each blob, but when I display the final image, the box is stretched on the x-axis: when the object is on the left the box is close to it (although around 2.5 times larger), and as it moves to the right the box moves faster (= further and further from the object) until it reaches the right end of the window when the object isn't even halfway. This doesn't happen on the y-axis, where everything is fine.
It's not a problem with rectangles; it happens when I use fillBlob as well, the blob shape comes out stretched and misaligned. Also, it's not a problem related to image capturing, since I've tried with Kinect (OpenNI), a webcam and even a single image (imread()), and I verified that every ImageGenerator, Mat and IplImage used was 640x480, 8-bit depth, for which I used AUTOSIZE for the namedWindow (enlarging to a fullscreen window doesn't help either). Showing the BGR frame and the thresholded image gives no problems, they both fit into the window, but the detected blobs seem to belong to a different resolution space when I merge them with the original image.
Here's the code; not much has changed from the usual examples found online everywhere:
//[...]
namedWindow("Color Image", CV_WINDOW_AUTOSIZE);
namedWindow("Color Tracking", CV_WINDOW_AUTOSIZE);
//[...] I already got the two cv::Mat I need, imgBGR and imgTresh
CBlobResult blobs;
CBlob *currentBlob;
Point pt1, pt2;
Rect rect;
//had to do Mat to IplImage conversion, since cvBlobsLib doesn't like mats
IplImage iplTresh = imgTresh;
IplImage iplBGR = imgBGR;
blobs = CBlobResult(&iplTresh, NULL, 0);
blobs.Filter(blobs, B_EXCLUDE, CBlobGetArea(), B_LESS, 100);
int nBlobs = blobs.GetNumBlobs();
for (int i = 0; i < nBlobs; i++)
{
currentBlob = blobs.GetBlob(i);
rect = currentBlob->GetBoundingBox();
pt1.x = rect.x;
pt1.y = rect.y;
pt2.x = rect.x + rect.width;
pt2.y = rect.y + rect.height;
cvRectangle(&iplBGR, pt1, pt2, cvScalar(255, 255, 255, 0), 3, 8, 0);
}
//[...]
imshow("Color Image", imgBGR);
imshow("Color Tracking", imgTresh);
The "[...]" is code that shouldn't have nothing to do with this issue, but if you need further info on how I handled the images, let me know and I'll post it.
Based on the fact that the way I capture the image doesn't change anything, that BGR frame and B/W image are well shown, and that after getting blobs any way of displaying them gives the same (wrong) result, the problem must be something between CBlobResult() and matrix2ipl conversion, but I don't really know how to find it out.
Oh god, I spent ages to write the whole problem and the next day I found the answer almost casually. As I created the B/W matrix for tresholding, I didn't make it single-channel; I copied the BGR matrix type, thus having a treshold image with 3 channels which resulted in a widthStep 3 times the frame width. Resolved creating cv::Mat imgTresh with CV_8UC1 as type.
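In code, the fix boils down to something like this (a sketch only; the original code isolates a color, e.g., with cvInRangeS, which likewise produces a single-channel mask, and the threshold value here is illustrative):
cv::Mat imgTresh(imgBGR.rows, imgBGR.cols, CV_8UC1); // single channel, not a copy of imgBGR's 3-channel type
cv::Mat gray;
cv::cvtColor(imgBGR, gray, cv::COLOR_BGR2GRAY);
cv::threshold(gray, imgTresh, 128, 255, cv::THRESH_BINARY); // widthStep now matches the frame width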

efficiently threshold red using HSV in OpenCV

I'm trying to threshold red pixels in a video stream using OpenCV. I have other colors working quite nicely, but red poses a problem because it wraps around the hue axis (i.e., HSV(0, 255, 255) and HSV(179, 255, 255) are both red). The technique I'm using now is less than ideal. Basically:
cvInRangeS(src, cvScalar(0, 135, 135), cvScalar(20, 255, 255), dstA);
cvInRangeS(src, cvScalar(159, 135, 135), cvScalar(179, 255, 255), dstB);
cvOr(dstA, dstB, dst);
This is suboptimal because it requires a branch in the code for red (potential bugs), the allocation of two extra images, and two extra operations when compared to the easy case of blue:
cvInRangeS(src, cvScalar(100, 135, 135), cvScalar(140, 255, 255), dst);
The nicer alternative that occurred to me was to "rotate" the image's colors, so that the target hue is at 90 degrees. E.g.:
int rotation = 90 - 179; // 179 = red
cvAddS(src, cvScalar(rotation, 0, 0), dst1);
cvInRangeS(dst1, cvScalar(70, 135, 135), cvScalar(110, 255, 255), dst);
This allows me to treat all colors similarly.
However, the cvAddS operation doesn't wrap the hue values back to 180 when they go below 0, so you lose data. I looked at converting the image to CvMat so that I could subtract from it and then use modulus to wrap the negative values back to the top of the range, but CvMat doesn't seem to support modulus. Of course, I could iterate over every pixel, but I'm concerned that that's going to be very slow.
I've read many tutorials and code samples, but they all seem to conveniently only look at ranges that don't wrap around the hue spectrum, or use solutions that are even uglier (eg. re-implementing cvInRangeS by iterating over every pixel and doing manual comparisons against a color table).
So, what's the usual way to solve this? What's the best way? What are the tradeoffs of each? Is iterating over pixels much slower than using built-in CV functions?
This is kind of late, but this is what I'd try.
Make the conversion: cvCvtColor(imageBgr, imageHsv, CV_RGB2HSV);
Note that RGB vs. BGR are purposely being crossed.
This way, red will be treated as the blue channel and will be centered around 170. There will also be a flip in direction, but that is OK as long as you know to expect it.
You can compute the hue channel in the range 0..255 with CV_BGR2HSV_FULL. Your original hue tolerance of 10 becomes 14 (10/180*256), i.e., the hue must be in the range 128-14..128+14:
public void inColorRange(CvMat imageBgr, CvMat dst, int color, int threshold) {
cvCvtColor(imageBgr, imageHsv, CV_BGR2HSV_FULL);
int rotation = 128 - color;
cvAddS(imageHsv, cvScalar(rotation, 0, 0), imageHsv);
cvInRangeS(imageHsv, cvScalar(128-threshold, 135, 135),
cvScalar(128+threshold, 255, 255), dst);
}
You won't believe it, but I had exactly the same issue, and I solved it using a simple iteration through the hue (not the whole HSV) image.
Is iterating over pixels much slower than using built-in CV functions?
I've just tried to understand the cv::inRange function but didn't get it at all (it seems the author used some specific iteration).
There is a really simple way of doing this.
First make two different color ranges
cv::Mat lower_red_hue_range;
cv::Mat upper_red_hue_range;
cv::inRange(hsv_image, cv::Scalar(0, 100, 100), cv::Scalar(10, 255, 255), lower_red_hue_range);
cv::inRange(hsv_image, cv::Scalar(160, 100, 100), cv::Scalar(179, 255, 255), upper_red_hue_range);
Then combine the two masks using addWeighted
cv::Mat red_hue_mask;
cv::addWeighted(lower_red_hue_range, 1.0, upper_red_hue_range, 1.0, 0.0, red_hue_mask);
Now you can just apply the mask to the image
cv::Mat result;
inputImageMat.copyTo(result, red_hue_mask);
I got the idea from a blog post I found
cvAddS(...) is equivalent, at the element level, to:
out = static_cast<dest> ( in + shift );
This static_cast is the problem, because it clips/truncates the values.
A solution would be to shift the data from (0-180) to (x, 255), then apply a non-clipping add with overflow:
out = uchar(in + (255-180) + rotation );
Now you should be able to use a single inRange call; just shift your red interval according to the above formula.
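A hedged sketch of that idea with the C++ API (not the exact formula above): wrap the hue with an explicit modulo-180 lookup table, so any target hue can be centered at 90 and matched with a single inRange call. The function and parameter names are made up for illustration:
cv::Mat centerHue(const cv::Mat& hsv, int targetHue, int center = 90)
{
    cv::Mat lut(1, 256, CV_8UC1);
    for (int h = 0; h < 256; ++h)
        lut.at<uchar>(0, h) = uchar(((h - targetHue + center) % 180 + 180) % 180);
    std::vector<cv::Mat> ch;
    cv::split(hsv, ch);
    cv::LUT(ch[0], lut, ch[0]); // remap only the hue channel, wrapping around 180
    cv::Mat out;
    cv::merge(ch, out);
    return out;
}
// Usage: red (hue 0) now sits at 90, so one range check suffices:
// cv::inRange(centerHue(hsv, 0), cv::Scalar(80, 135, 135), cv::Scalar(100, 255, 255), mask);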